Ethernet innovation pits power against speed
While the move to 400G Ethernet has so far been largely a hyperscaler and telco-network event, the ambition for those users, as well as for data-center customers, is ultimately to move to at least 800Gbps and possibly 1.6Tbps.
And while 800Gbps seems to be a solid goal for Ethernet networking visionaries, the challenges—such as the optics, power, and architecture required to make the next speed leap—seem formidable.
The need for increased speed in data centers and cloud services is driven by many factors, including the continued growth of hyperscale networks from players like Google, Amazon, and Facebook, as well as the increasingly distributed cloud, artificial-intelligence, video, and mobile-application workloads that current and future networks will support.
Another driver is that global IP traffic is expected to grow to 396 exabytes (EB) per month by 2022, up from 177EB per month in 2017, according to the IEEE 802.3 Industry Connections Ethernet Bandwidth Assessment from April 2020. Underlying factors, such as the increasing number of users, rising access rates, new access methods, and expanding services, all point to continuing growth in demand for bandwidth, the report stated.
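As a back-of-the-envelope check (our arithmetic, not the report's), those two data points imply a compound annual growth rate of roughly 17.5%:

```python
# Implied compound annual growth rate (CAGR) of global IP traffic,
# from the two figures cited above (exabytes per month).
traffic_2017_eb = 177
traffic_2022_eb = 396
years = 2022 - 2017

cagr = (traffic_2022_eb / traffic_2017_eb) ** (1 / years) - 1
print(f"Implied traffic growth: {cagr:.1%} per year")  # ~17.5% per year
```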
And there has been significant industry activity to move faster Ethernet technologies along. For example, the Institute of Electrical and Electronics Engineers (IEEE) and the IEEE Standards Association formed the IEEE 802.3 Beyond 400 Gbps Ethernet Study Group at the end of 2020.
“The path to beyond 400G Ethernet exists, but there are a host of options and physical challenges that will need to be considered to take the next leap in speed rate for Ethernet,” said John D’Ambrosia, Distinguished Engineer, Futurewei Technologies, in a statement at the group’s formation.
Also late last year, the Optical Internetworking Forum (OIF) set up new projects around higher-speed Ethernet, including the 800G Coherent project. That effort is looking to define interoperable 800G coherent line specifications—which basically define how higher-speed switch gear communicates over long distances—for campus and data-center interconnect applications, according to Tad Hofmeister, technical lead for optical networking technologies at Google and OIF vice president.
This week, D’Ambrosia and Hofmeister were part of a group of experts from industry bellwethers including Cisco, Juniper, Google, Facebook, and Microsoft brought together for the Ethernet Alliance’s Technology Exploration Forum (TEF) to look at issues and requirements around setting next-generation Ethernet rates.
One overarching challenge for moving beyond 400Gbps is the power required to drive those systems.
“Power is growing at an unsustainable rate. Power is the problem to solve because it limits what we can build and deploy as well as what our planet can sustain,” Rakesh Chopra, a Cisco Fellow, told the TEF. “Power-per-bit has always been improving—we can increase bandwidth by 80x—but the power required for that goes up 22x. Every watt we consume in the network is that much less for the servers we can deploy. It’s not a question of how small you can crunch equipment but of how efficient you can be.”
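Chopra’s figures make the bind concrete: efficiency per bit improves even as the total power budget balloons. A quick sanity check of the 80x/22x numbers (our arithmetic, not Cisco’s):

```python
# Sanity check on the 80x bandwidth / 22x power figures quoted above:
# power-per-bit improves sharply, yet absolute power still soars.
bandwidth_gain = 80   # relative increase in system bandwidth
power_gain = 22       # relative increase in total power draw

power_per_bit = power_gain / bandwidth_gain
print(f"Power per bit falls to {power_per_bit:.0%} of its former value")  # ~28%
print(f"...but the absolute power budget still grows {power_gain}x")
```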
Power is one of the major constraints for speeds beyond 400G, said Sameh Boujelbene, senior director at Dell’Oro Group. “Power is already impacting how hyperscalers roll out higher speeds, because they need to wait for different pieces of technology to work efficiently within their existing power budget, and that issue only grows with higher speeds.”
The big question is whether we hit the wall on bandwidth or power first, said Brad Booth, principal hardware engineer with Microsoft’s Azure Hardware Systems Group. “If we continue using the same technologies we use today, we would flatline on the power band. As we need more and more power, we have a power limitation. We have to rely on what’s being built and what’s available through the infrastructures we support.”
Many industry and research organizations, DARPA among them, are looking at how to build greater bandwidth density with improved power efficiency, Booth noted.
And that will require creative answers. “Future data-center networks may require a combination of photonic innovation and optimized network architectures,” Boujelbene said.
One of those potential innovations, called co-packaged optics (CPO), is under development by Broadcom, Cisco, Intel, and others, but it is still a nascent field. CPO ties the currently separate optics and switch silicon together into one package, with the goal of significantly reducing power consumption.
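To see why pulling the optics into the switch package matters for the power budget, here is a minimal sketch comparing a switch’s optics power under pluggable versus co-packaged designs. The port count, per-module wattage, and savings fraction are hypothetical placeholders chosen for illustration, not figures from any vendor named here:

```python
# Illustrative-only comparison of switch optics power:
# faceplate-pluggable modules vs. co-packaged optics (CPO).
# Every number below is a hypothetical placeholder, NOT vendor data.
NUM_PORTS = 32               # hypothetical 32 x 800G switch
PLUGGABLE_W_PER_PORT = 16.0  # assumed draw of one pluggable 800G module
CPO_SAVINGS_FRACTION = 0.30  # assumed saving from shortening the electrical
                             # path between switch silicon and optics

pluggable_total_w = NUM_PORTS * PLUGGABLE_W_PER_PORT
cpo_total_w = pluggable_total_w * (1 - CPO_SAVINGS_FRACTION)

print(f"Pluggable optics:   {pluggable_total_w:.0f} W per switch")
print(f"Co-packaged optics: {cpo_total_w:.0f} W per switch "
      f"(saving {pluggable_total_w - cpo_total_w:.0f} W)")
```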
“CPO provides the next big step in power reduction and offers power and density savings to support next-generation system scaling,” said Rob Stone, technical sourcing manager with Facebook. Stone is also the technical working group chair of the Ethernet Technology Consortium, which announced the completion of a specification for 800GbE. “What is needed is a standards-supported CPO ecosystem for wide adoption.”
Facebook and Microsoft are co-developing a CPO specification “to address the challenge of data-center traffic growth by reducing the power consumption of the switch-optic electrical interface,” the companies stated on their CPO website. “A common, publicly available system specification is required to guide optical and switch vendors to quickly develop co-packaged solutions and allow for the creation of a diverse ecosystem of suppliers.”
The OIF is also working on a Co-Packaging Framework, a specification that will incorporate application spaces and relevant technology considerations for co-packaging communication interfaces with one or more ASICs. A primary objective of the specification is to identify new opportunities for interoperability standards for possible future work at the OIF or other standards organizations, Hofmeister stated.
CPO has a long road ahead of it, though, experts say. “Architecting, designing, deploying, and operationalizing systems with CPO is an incredibly difficult task, and therefore it is critical as an industry that we start before it’s too late,” Cisco’s Chopra wrote in a recent blog about CPO. “Today in the service provider and web-scale networks most links outside of the rack are optical while wiring within the rack is copper. As speeds increase the longest copper links need to move to optical. Eventually, all the links leaving a silicon package will be optical rather than electrical.”
“While it is getting harder to go faster, it’s an open question whether we can build systems supporting the next rate and higher densities the way we are used to,” said David Ofelt, an engineer with Juniper Networks. “Even if we can, it is not clear the result will be acceptable to the end user.”
It will be many years before the technology to support faster Ethernet rates is available in volume with suitable packaging and system support, Ofelt said. “It’s not a standards-are-slow thing, it is a reality-of-building-an-ecosystem-at-scale thing,” he said.
Part of the challenge of moving to higher speeds en masse may be that adoption rates are widely staggered.
For example, the majority of enterprise customers will move from 10G to 25G in the next two to five years, and 50G to 100G will be the next speed for many of them. Expectations for wireless at the network edge may change that, said Vlad Kozlov, founder and CEO of the research firm LightCounting. “Companies relying heavily on digital services or providing them will move from 100G to 400G in the next two-to-five years. 800G or 1.6T will be the next speeds for them. However, bandwidth-hungry AI services may change this situation in the future: the majority of businesses will need faster connectivity to transmit videos monitoring their operations.”
In the end what’s really needed is a flexible underlying architecture to support future bandwidth demand beyond 400Gbps, D’Ambrosia told the TEF. “There’s a lot of work to do, and we’ve really just started.”